
    Underpinning Quality Assurance: Identifying Core Testing Strategies for Multiple Layers of Internet-of-Things-Based Applications

    The Internet of Things (IoT) constitutes a digitally integrated network of intelligent devices equipped with sensors, software, and communication capabilities, facilitating data exchange among a multitude of digital systems via the Internet. Although testing plays a pivotal role in the software development life-cycle (SDLC) for ensuring software quality in both its functional and non-functional aspects, it has been somewhat overlooked within this intricate software–hardware ecosystem. To address this, various testing techniques are applied to minimize the failure rates of IoT applications in real time. However, executing a comprehensive test suite for specific IoT software remains a complex undertaking. This paper proposes a holistic framework aimed at aiding quality assurance engineers in delineating essential testing methods across the different testing levels of the IoT. This delineation is crucial for effective quality assurance and ultimately reduces failure rates in real-time scenarios. Furthermore, the paper maps the identified tests to each layer of the layered IoT framework. This comprehensive approach seeks to enhance the reliability and performance of IoT-based applications.

    CGST: Provably Secure Lightweight Certificateless Group Signcryption Technique Based on Fractional Chaotic Maps

    In recent years, there has been considerable research interest in analyzing chaotic constructions and their associated cryptographic structures. Compared with the basic combination of encryption and signature, a signcryption scheme offers a more practical way of achieving message confidentiality and authentication simultaneously. However, the security of a signcryption scheme is questionable when deployed in modern safety-critical systems, especially as billions of sensitive user records are transmitted over open communication channels. To address this problem, a lightweight, provably secure certificateless technique that uses Fractional Chaotic Maps (FCM) for group-oriented signcryption (CGST) is proposed. The main feature of the CGST-FCM technique is that any group signcrypter may encrypt data/information with the group manager (GM) and have it sent to the verifier seamlessly. This implies that the legitimacy of the signcrypted information/data is verifiable using the public parameters of the group, but the verifier cannot link it to the corresponding signcrypter. In this scenario, valid signcrypted information/data cannot be produced by the GM or by any signcrypter in that group alone. However, the GM is allowed to reveal the identity of the signcrypter when there is a legal conflict, to prevent repudiation of the signature. The CGST-FCM technique is protected against the indistinguishability-under-chosen-ciphertext attack (IND-CCA). Additionally, the computationally hard Diffie-Hellman (DH) problems are used to establish the unlinkability, untraceability, unforgeability, and robustness of the proposed CGST-FCM scheme. Finally, the security investigation of the presented CGST-FCM technique shows appreciable consistency and high efficiency when applied in real-time security applications.
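Chaotic-map cryptosystems of this kind typically build on the semigroup property of Chebyshev polynomials, T_a(T_b(x)) = T_ab(x) = T_b(T_a(x)), which enables a Diffie-Hellman-style exchange over the map. The sketch below uses classical (integer-order) Chebyshev polynomials over Z_p for illustration only; the paper's fractional chaotic maps and the full CGST construction are not reproduced, and the toy parameters are nowhere near secure:

```python
def chebyshev(n: int, x: int, p: int) -> int:
    """Evaluate the Chebyshev polynomial T_n(x) mod p via the recurrence
    T_0 = 1, T_1 = x, T_n = 2*x*T_{n-1} - T_{n-2}."""
    if n == 0:
        return 1 % p
    prev, cur = 1, x % p
    for _ in range(n - 1):
        prev, cur = cur, (2 * x * cur - prev) % p
    return cur

# Toy key agreement (illustrative parameters, NOT secure):
p, x0 = 7919, 1234     # public modulus and seed
a, b = 57, 91          # the two parties' private values
ya = chebyshev(a, x0, p)                 # Alice's public value
yb = chebyshev(b, x0, p)                 # Bob's public value
assert chebyshev(a, yb, p) == chebyshev(b, ya, p)  # shared secret agrees
```

The semigroup identity holds as a polynomial identity over the integers, so it survives reduction mod p; signcryption schemes layer authentication on top of such an exchange.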

    A New Multistage Encryption Scheme Using Linear Feedback Register and Chaos-Based Quantum Map

    With the increasing volume of data transmitted through insecure communication channels, big data security has become one of the important concerns in the cybersecurity domain. To address these concerns and keep data safe, a robust privacy-preserving cryptosystem is necessary. Chaos-based encryption algorithms are favored over standard cryptographic methods for such a solution because they offer multistage encryption levels, high speed, high security, and low computational overhead, among other characteristics. In this work, a secure image encryption scheme is proposed using a linear feedback shift register (LFSR) and a chaos-based quantum map. The security of the scheme depends mainly on the secret keys supplied as input to the algorithm. The threat analysis and the statistical tests, along with critical comparisons with other schemes, indicate that the presented algorithm is significantly secure and is resistant to a wide range of attacks, such as differential and statistical attacks. The proposed method shows higher sensitivity and security than existing encryption algorithms. Several security parameters validated the security of the proposed work, including correlation coefficient analysis among neighboring pixels, entropy, number of pixels change rate (NPCR), unified average change intensity (UACI), mean square error (MSE), brute force, key sensitivity, and peak signal-to-noise ratio (PSNR) analyses. The randomness of the ciphers produced by the proposed technique was also verified against the NIST SP 800-22 test suite. The NIST results indicate that the ciphers are highly random and exhibit no periodicity or patterns.
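To illustrate the LFSR half of such a scheme, here is a minimal Fibonacci LFSR keystream generator with XOR encryption, using a 16-bit register with taps for the polynomial x^16 + x^14 + x^13 + x^11 + 1. This is a generic sketch; the chaos-based quantum map stage of the actual scheme is omitted, and on its own an LFSR stream is not cryptographically secure:

```python
def lfsr_keystream(seed: int, taps: tuple, nbits: int, length: int) -> bytes:
    """Generate `length` keystream bytes from a Fibonacci LFSR.
    `taps` are 0-indexed bit positions XORed to form the feedback bit."""
    state = seed & ((1 << nbits) - 1)   # seed must be nonzero
    out = bytearray()
    for _ in range(length):
        byte = 0
        for _ in range(8):
            fb = 0
            for t in taps:
                fb ^= (state >> t) & 1
            byte = (byte << 1) | (state & 1)            # output LSB
            state = (state >> 1) | (fb << (nbits - 1))  # shift in feedback
        out.append(byte)
    return bytes(out)

def xor_encrypt(data: bytes, seed: int) -> bytes:
    # Taps (0, 2, 3, 5) on a right-shifting 16-bit register realize
    # x^16 + x^14 + x^13 + x^11 + 1 (maximal-length sequence).
    ks = lfsr_keystream(seed, (0, 2, 3, 5), 16, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

# XOR is its own inverse, so the same call decrypts:
ct = xor_encrypt(b"secret image bytes", 0xACE1)
assert xor_encrypt(ct, 0xACE1) == b"secret image bytes"
```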

    Genome-Wide Analysis of the Emerging Infection with Mycobacterium avium Subspecies paratuberculosis in the Arabian Camels (Camelus dromedarius)

    Mycobacterium avium subspecies paratuberculosis (M. ap) is the causative agent of paratuberculosis, or Johne's disease (JD), in herbivores, with potential involvement in cases of Crohn's disease in humans. JD is spread worldwide and is economically important for both the beef and dairy industries. Generally, pathogenic ovine strains (M. ap-S) are mainly found in sheep, while bovine strains (M. ap-C) infect other ruminants (e.g., cattle, goats, deer) as well as sheep. In an effort to characterize this emerging infection in dromedary/Arabian camels, we successfully cultured M. ap from several samples collected from infected camels suffering from chronic, intermittent diarrhea suggestive of JD. Gene-based typing of the isolates indicated that all isolates belong to the sheep lineage of M. ap strains (M. ap-S), suggesting a putative transmission from infected sheep herds. Screening sheep and goat herds associated with the camels identified the circulation of this type in sheep but not in goats. The current genome-wide analysis recognizes these camel isolates as a sub-lineage of the sheep strain, with a significant number of single nucleotide polymorphisms (SNPs) between sheep and camel isolates (∼1000 SNPs). Such polymorphism could represent geographical differences among isolates or host adaptation of M. ap during camel infection. To our knowledge, this is the first attempt to examine the genomic basis of this emerging infection in camels, with implications for the evolution of this important pathogen. The sequenced genomes of M. ap isolates from camels will further assist our efforts to understand JD pathogenesis and the dynamics of disease transmission across animal species.

    Abstracts from the 3rd International Genomic Medicine Conference (3rd IGMC 2015)


    Predicting Rogue Content and Arabic Spammers on Twitter

    Twitter is one of the most popular online social networks for spreading propaganda and messages in the Arab region. Spammers now create rogue accounts to distribute adult content through Arabic tweets, which Arabic norms and cultures prohibit. Arab governments face a huge challenge in detecting these accounts. Researchers have extensively studied English spam on online social networks, while to date, social network spam in other languages has been largely ignored. In a previous study, we estimated that rogue and spam content accounted for approximately three quarters of all content carrying Arabic trending hashtags in Saudi Arabia. This alarming rate, supported by independent concurrent estimates, highlights the urgent need to develop adaptive spam detection methods. In this work, we collected a pure data set from spam accounts producing Arabic tweets. We applied lightweight feature engineering based on rogue content and user profiles. The 47 generated features were analyzed, and the best features were selected. Our performance results show that the random forest classification algorithm with 16 features performs best, with accuracy rates greater than 90%.
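Lightweight feature engineering of the kind described can be sketched as a small extractor over tweet text and profile counters. The features below are an illustrative subset chosen for this sketch, not the paper's actual 47 features:

```python
import re

def tweet_features(text: str, followers: int, following: int) -> dict:
    """Extract a few content/profile features of the kind used for
    spam classification (illustrative subset only)."""
    urls = re.findall(r"https?://\S+", text)
    hashtags = re.findall(r"#\w+", text)   # \w matches Arabic letters in Python 3
    mentions = re.findall(r"@\w+", text)
    n_words = max(len(text.split()), 1)
    return {
        "n_urls": len(urls),
        "n_hashtags": len(hashtags),
        "n_mentions": len(mentions),
        "url_ratio": len(urls) / n_words,          # URL-heavy tweets lean spammy
        "follow_ratio": followers / max(following, 1),  # spammers follow many, few follow back
    }
```

Feature vectors like these would then be fed to a classifier such as the random forest the paper selects.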

    Latency-Aware Accelerator of SIMECK Lightweight Block Cipher

    This article presents a latency-optimized implementation of the SIMECK lightweight block cipher on a field-programmable gate array (FPGA) platform, with block and key lengths of 32 and 64 bits, respectively. The critical features of our architecture are parallelism, pipelining, and a dedicated controller. Parallelism splits the digits of the key and data blocks into smaller segments; each segmented key and data block is then used in parallel for the encryption and decryption computations, which reduces the required clock cycles. Two-stage pipelining shortens the critical path and improves the clock frequency. A dedicated controller provides the control functionalities. For the performance evaluation of our design, we report implementation results for two cases on Xilinx 7-series FPGA devices. In the first case, the proposed architecture operates at 382, 379, and 388 MHz on Kintex-7, Virtex-7, and Artix-7 devices, respectively, utilizing 49, 51, and 50 slices; one encryption and decryption computation takes 16 clock cycles, and the minimum power consumption is 172 mW on the Kintex-7 device. In the second case, we targeted the same circuit frequency of 50 MHz for synthesis on the Kintex-7, Virtex-7, and Artix-7 devices; with minimum hardware resource utilization (51 slices), the lowest power consumption of 13.203 mW is obtained for the Kintex-7 device. As a proof of concept, the proposed SIMECK design is validated on the NEXYS 4 board with the Artix-7 device. The implementation results reveal that the proposed architecture is suitable for many resource-constrained cryptographic applications.
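The SIMECK data path itself is a simple Feistel step built from AND, XOR, and rotations, f(x) = (x AND (x <<< 5)) XOR (x <<< 1), which is what makes such compact, parallelizable hardware possible. A software sketch of the SIMECK32/64 round follows; the round keys here are arbitrary illustrative values, as the cipher's key schedule is omitted:

```python
MASK16 = 0xFFFF  # SIMECK32/64 operates on 16-bit words

def rol16(x: int, r: int) -> int:
    """Rotate a 16-bit word left by r bits."""
    return ((x << r) | (x >> (16 - r))) & MASK16

def simeck_f(x: int) -> int:
    """SIMECK round function: f(x) = (x AND (x <<< 5)) XOR (x <<< 1)."""
    return ((x & rol16(x, 5)) ^ rol16(x, 1)) & MASK16

def encrypt_block(l: int, r: int, round_keys) -> tuple:
    """Feistel encryption: (l, r) -> (r ^ f(l) ^ k, l) per round."""
    for k in round_keys:
        l, r = (r ^ simeck_f(l) ^ k) & MASK16, l
    return l, r

def decrypt_block(l: int, r: int, round_keys) -> tuple:
    """Apply the rounds in reverse to invert encrypt_block."""
    for k in reversed(round_keys):
        l, r = r, (l ^ simeck_f(r) ^ k) & MASK16
    return l, r

# Round-trip check with arbitrary (illustrative) round keys:
rks = [(i * 0x9E37 + 0x55) & 0xFFFF for i in range(32)]
ct = encrypt_block(0x6565, 0x6877, rks)
assert decrypt_block(*ct, rks) == (0x6565, 0x6877)
```

In hardware, the paper's design splits these words further and pipelines the round to reach the reported clock rates.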

    The Reality of Internet Infrastructure and Services Defacement: A Second Look at Characterizing Web-Based Vulnerabilities

    In recent years, the number of people using the Internet has increased worldwide, and the use of web applications in many areas of daily life, such as education, healthcare, finance, and entertainment, has also increased. At the same time, there has been an increase in the number of web application security issues that directly compromise the confidentiality, availability, and integrity of data. One of the most widespread web problems is defacement. In this research, we focus on the vulnerabilities detected on websites previously exploited and defaced by attackers, and we report the vulnerabilities discovered by the most popular scanning tools, OWASP ZAP, Burp Suite, and Nikto, ordered from the highest risk to the lowest. First, we scan 1000 URLs of defaced websites using the three web application assessment tools (OWASP ZAP, Burp Suite, and Nikto) to detect vulnerabilities that should be addressed and avoided when building and structuring websites. Then, we compare these tools based on their performance, scanning time, the names and number of vulnerabilities found, and the severity of their impact (high, medium, low). Our results show that Burp Suite Professional reports the highest number of vulnerabilities, while Nikto has the highest scanning speed. Additionally, the OWASP ZAP tool reported medium- and low-level alerts, but no high-level alerts. Moreover, we detail the best and worst uses of these tools. Furthermore, we discuss the concept of the Domain Name System (DNS) and the most common ways it can be attacked, such as cache poisoning, DDoS, and DoS attacks, and we link it to our topic through the importance of the DNS infrastructure and how it can be a vector for hacking and defacing sites. We also introduce the tools used for DNS monitoring. Finally, we give recommendations on the importance of security awareness for the community and for programmers and application developers, some of whom do not have enough knowledge about security, which allows vulnerabilities to occur.

    DAMAC: A Delay-Aware MAC Protocol for Ad Hoc Underwater Acoustic Sensor Networks

    In a channel shared by several nodes, the scheduling algorithm is a key factor in avoiding collisions in the random access-based approach. Commonly, scheduling algorithms are used to enhance network performance to meet certain requirements. Therefore, in this paper we propose a Delay-Aware Media Access Control (DAMAC) protocol for monitoring time-sensitive applications over multiple hops in Underwater Acoustic Sensor Networks (UASNs). DAMAC relies on the random access-based approach, in which each node uses Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) to determine the channel status, switches nodes on and off to conserve energy, and allows concurrent transmissions to improve underwater communication in UASNs. In addition, DAMAC does not require any handshaking packets prior to data transmission, which helps to improve network performance across several metrics. The proposed protocol exploits the long propagation delay between underwater nodes by scheduling them to transmit their data packets concurrently. The simulation results show that the DAMAC protocol outperforms the Aloha, BroadcastMAC, RMAC, Tu-MAC, and OPMAC protocols under varying network loads in terms of energy efficiency, communication overhead, and fairness of the network by up to 65%, 45%, and 726%, respectively.
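The core idea, staggering transmissions so the long acoustic propagation delay separates packet arrivals at the receiver, can be sketched as follows. This is a toy model (nominal 1500 m/s sound speed, known point-to-point ranges, no jitter), not DAMAC's actual scheduling algorithm:

```python
SOUND_SPEED = 1500.0  # nominal speed of sound in seawater, m/s

def propagation_delay(distance_m: float) -> float:
    """Acoustic propagation delay over a given range, in seconds."""
    return distance_m / SOUND_SPEED

def staggered_start_times(distances_m, pkt_dur_s, base_time=0.0):
    """Schedule senders at different ranges from a common receiver so
    their packets arrive back-to-back, spaced by one packet duration.
    Nodes may transmit concurrently; only the arrivals are serialized."""
    delays = [propagation_delay(d) for d in distances_m]
    latest = max(delays)
    # Node i starts so its packet arrives at base_time + latest + i * pkt_dur_s:
    return [base_time + latest - delays[i] + i * pkt_dur_s
            for i in range(len(delays))]
```

For two nodes at 1.5 km and 3 km with 0.5 s packets, the farther node starts first and both transmissions overlap in time, yet the arrivals at the receiver never collide.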

    Website Defacement Detection and Monitoring Methods: A Review

    Web attacks, and website defacement attacks in particular, are pressing issues in web security. Recently, website defacement attacks have become a main security threat for many organizations and governments that provide web-based services. Website defacement attacks can cause huge financial and data losses that badly affect users and website owners and can lead to political and economic problems. Several detection techniques and tools are used to detect and monitor website defacement attacks. However, some techniques work only on static web pages, others on dynamic web pages, or on both, and all need to keep false alarms under control. Many techniques can detect web defacement: some are based on available online tools, and others on comparison and classification techniques. The evaluation criteria are typically detection accuracy (with 100% as the standard to aim for) and false-alarm rates (which should stay below 1.5%, and never exceed 2%). This paper presents a literature review of previous works related to website defacement, comparing the works based on their accuracy results and the techniques used, and identifying the most efficient techniques.
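The simplest comparison-based detection technique such reviews cover is checksum monitoring: hash a trusted snapshot of each page and flag any later deviation. A minimal sketch follows; it suits only static pages, since any legitimate dynamic content would also trip the alarm (dynamic pages need similarity thresholds instead):

```python
import hashlib

def page_fingerprint(content: bytes) -> str:
    """SHA-256 digest of a page snapshot, used as the trusted baseline."""
    return hashlib.sha256(content).hexdigest()

def is_defaced(baseline_fp: str, current: bytes) -> bool:
    """Flag any byte-level deviation from the trusted baseline."""
    return page_fingerprint(current) != baseline_fp

# Monitoring loop body: fetch the live page, compare against the baseline.
baseline = page_fingerprint(b"<html><body>Welcome</body></html>")
assert not is_defaced(baseline, b"<html><body>Welcome</body></html>")
assert is_defaced(baseline, b"<html><body>HACKED</body></html>")
```

Classification-based detectors trade this brittleness for learned features of legitimate page changes, at the cost of the false-alarm tuning the review discusses.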